What is Foopipes

Foopipes is a lightweight integration engine for retrieving data from multiple sources, transforming it, and ingesting it into various data stores or pushing it to other external services. Foopipes can also expose its own endpoints and act as a service in its own right.

It uses a message-based, workflow-like architecture with error queues and retries for reliable operation.

It runs in Docker and is configuration-driven, using YAML or JSON configuration files.
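
As a rough illustration, assuming a pipeline that polls an HTTP source, transforms the result and writes it to Elasticsearch, a configuration might look something like the sketch below. Every key and value is an illustrative assumption, not the documented Foopipes schema.

    # Hypothetical sketch of a Foopipes-style pipeline configuration.
    # All keys below (pipelines, source, transform, destination, retry)
    # are illustrative assumptions, not the actual Foopipes schema.
    pipelines:
      articles:
        source:
          type: http
          url: https://example.com/api/articles
          interval: "00:05:00"          # poll every five minutes
        transform:
          script: transform.csx         # C# script file, or inline script
        destination:
          type: elasticsearch
          url: http://elasticsearch:9200
          index: articles
        retry:
          count: 5                      # failed messages are retried,
          errorQueue: articles-errors   # then moved to an error queue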

Data transformation is performed using C# scripts or Node.js modules. Scripts can either be .csx files or inlined in the configuration file. Node.js modules are written in JavaScript or transpiled from TypeScript or your language of choice.
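
As a minimal sketch, assuming Foopipes simply invokes an exported function with each document to transform (the module contract shown here is an assumption for illustration), a Node.js transformation module might look like:

    // Hypothetical Node.js transformation module (transform.js).
    // The exported function signature is an assumption for illustration;
    // the actual contract Foopipes expects may differ.
    module.exports = function transform(doc) {
      // Restructure and enrich the incoming document before it is
      // passed on to the configured destination.
      return {
        id: doc.id,
        title: (doc.title || '').trim(),
        summary: doc.body ? doc.body.slice(0, 200) : '',
        indexedAt: new Date().toISOString()
      };
    };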

Foopipes is scalable, distributed and robust.

The functionality of Foopipes can be extended with third-party plug-ins.

One of the design goals of Foopipes is that it can be set up quickly for common usage scenarios.

Purpose of Foopipes

  • Remove hard dependencies on external systems.
  • Create an abstraction layer between services.
  • Break up monoliths when moving to a microservice architecture.
  • Transform, restructure and enrich data before consumption.
  • Combine data from multiple data sources.
  • Build a free-text search index over data from multiple data sources.

Problem Definition

Modern websites often have many dependencies on external systems, either on the internet or inside an organization. Often these systems don't have the functionality, SLA, or performance to withstand the demands of business-critical consumers. Thus, the saying "when the wind blows, the leaves tremble" describes a reality that needs to be mitigated.

Also, the evolution of a system is often done in phases. When that is the case, it is preferable to break the system up into smaller pieces, where each piece can have its own life cycle, its own demands, and its own product owner. Still, much of the administration should be done in the same interface. The era of monoliths is coming to an end.

Vision

  • Data is more distributed than ever. Depending on many external sources makes it nearly impossible to build a robust system.
  • Multiple data consumers make schema changes a challenge.
  • Technology is moving faster than ever. Abstraction is a necessity.
  • Aggregated data can provide more value than each data set on its own.